TECHnalysis Research Blog

June 10, 2014
Screen Overload To Drive Screen-less Devices

As we move into an increasingly connected world, where the number of devices we own and use continues to rise and the range of activities we're trying to track and control continues to expand, there's at least one obvious challenge confronting both the industry and us: where do we see the information and content we want?

While that may seem like a bit of an odd or even naïve question, I think it could become a very important one. Plus, I believe the answer to it will have a number of important implications for the development of new technologies and new devices, particularly in burgeoning areas like wearables and home automation.

Of course, the obvious answer to the question would be on the screen of whatever new device we purchase for that particular tracking or controlling application. After all, screen display technologies continue to improve and expand at a rapid rate. Plus, as many modern device categories evolve, it’s often the screens themselves that are both the main hardware attraction as well as the center of our attention.

But I’m starting to wonder how far we can really take that argument. Does every new device we purchase really need to have its own dedicated screen? I’m sure my old friends and colleagues in the display industry won’t be happy to hear this, but I think we could soon reach a point of diminishing returns when it comes to adding big, beautiful displays to all our new devices.

To put it more practically, do I need to put a great display on a wearable device or a home automation gateway or any of a number of other interesting Internet of Things (IOT)-related devices that I expect we’ll see introduced over the next several years? My take is, no, probably not.

Of course, some might argue there isn't so much a limit on the screens we need as on the number of devices themselves. But as much as I would like to think that there'll be an increasing degree of device consolidation and a desire for consumers to reduce the number of devices they own and use, I see absolutely no evidence to suggest that possibility. In fact, the number of devices per person just continues to increase.

So, what does this mean? I believe it means we need, and will start to see, more developments that leverage the incredible set of screens we already have. Between big-screen HDTVs, higher-resolution PC monitors, notebooks, tablets and smartphones, a large percentage of people in the US already have access to a relatively wide choice of screen sizes, nearly all of which have HD-level resolutions (if not higher).

The challenge is that you can't easily connect to and/or "take over" those screens from other devices. Most people are unlikely to want to run cables from newer devices—especially ones that are likely to be small—so the only realistic option is a wireless connection. To do that, you need some kind of intelligence in both the sending and receiving devices, as well as agreed-upon standards.
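To make that sender/receiver split concrete, here is a minimal, purely hypothetical sketch (not any real product's API): the screen-less device only publishes structured state, and whatever nearby host happens to have a screen is responsible for rendering it. The device name and fields are invented for illustration.

```python
import json

def device_publish_state() -> str:
    """Screen-less wearable: emit its state as JSON, with no rendering logic at all."""
    state = {"device": "fitness-band", "steps": 8412, "battery_pct": 67}
    return json.dumps(state)

def host_render(payload: str) -> str:
    """Host with a screen (phone, TV, PC): turn the published state into displayable text."""
    state = json.loads(payload)
    return f"{state['device']}: {state['steps']} steps, battery {state['battery_pct']}%"

if __name__ == "__main__":
    # The transport (Bluetooth, Wi-Fi, etc.) is omitted; the point is the division of labor.
    print(host_render(device_publish_state()))
```

The design choice this illustrates is exactly the one standards have to settle: as long as both sides agree on the payload format, the same screen-less device can be rendered by any number of different screens.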

For years, there were several different competing wireless video standards, some from the CE industry and others from the PC industry, but most of them have now fallen by the wayside. Two of the last survivors—Intel's WiDi technology and the Wi-Fi Alliance's Miracast—essentially merged into a single standard as of this time last year, allowing a wide range of devices, from Windows-based PCs to Android-based smartphones and tablets, to connect to a select set of Miracast- or WiDi-enabled TVs. (Unfortunately, backward compatibility with legacy devices, including early implementations of either Miracast or WiDi, isn't always great.)

As with many areas, Apple has its own standard for wireless video connections called AirPlay. For iOS-based devices, in particular, AirPlay enables applications like sending video from an iPad to an AppleTV device plugged into a larger TV.

In the emerging worlds of wearables and other IOT-type applications, however, it’s not clear how connections between those devices and the likely screen targets of smartphones and tablets are going to work. Right now, many of the wearables and IOT devices function as dedicated accessories to host devices but, in several cases, the range of host OSes supported is very limited.

The problem that vendors face is essentially a philosophical question about the nature of each device. If it includes a reasonably sized screen, it's a standalone device; if it doesn't, it's essentially an accessory. While it's tempting to think that each device should be able to function as a standalone master device, I think consumers could tire of too many "masters." Instead, accessorizing a few primary devices, particularly the smartphone, could prove to be a more fruitful path.

As vendors start to offer a wider range of devices and consumers try to integrate these new options into their existing device collections, the need for more and better adoption of screen-sharing technologies will quickly become evident. In fact, I'd argue that easier screen sharing could even accelerate the adoption of new technologies: when consumers feel that a new device works with the equipment they already own, they're more likely to buy it.

The problem now is that few vendors are spending much time or effort on screen sharing. But if the Internet of Things is truly to take hold, the ability for screen-less devices to leverage existing displays will be a critical enabling technology. So, let’s hope we start to see more screen sharing soon.

Here's a link to the original column: http://techpinions.com/screen-overload-to-drive-screen-less-devices/31452
